Series of quasi-uniform scatterings with fast search, root systems and neural network classifications

Netay, Igor V.

arXiv.org Artificial Intelligence

In this paper we describe an approach to constructing large, extendable collections of vectors in predefined spaces of given dimensions. These collections are useful for configuring and training neural network latent spaces. For classification problems with a large or unknown number of classes, this makes it possible to build classifiers without a classification layer and to extend the number of classes without retraining the network from scratch. The construction creates large, well-spaced vector collections in spaces of the minimal possible dimension. If the number of classes is known or approximately predictable, one can choose a sufficiently large vector collection. If the number of classes must later be extended significantly, one can either extend the collection within the same latent space or embed it into a collection of higher dimension with the same spacing between vectors. Moreover, the regular symmetric structure of the constructed collections can significantly simplify the search for nearest cluster centers or embeddings in the latent space. The construction is based on the combinatorics and geometry of irreducible highest-weight representations of semi-simple Lie groups.
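The paper's construction relies on root systems of semi-simple Lie groups; as a minimal sketch of the underlying idea only, the following builds the simplest well-spaced collection (regular simplex vertices, all pairwise cosines equal to -1/d) and classifies by nearest target vector instead of a softmax layer. The helpers `simplex_collection` and `nearest_class` are hypothetical names, not from the paper.

```python
import numpy as np

def simplex_collection(d):
    """d+1 unit vectors in R^(d+1) with pairwise cosine -1/d:
    vertices of a regular simplex centered at the origin.
    (A stand-in for the paper's root-system constructions.)"""
    eye = np.eye(d + 1)
    centered = eye - eye.mean(axis=0)  # subtract the centroid
    return centered / np.linalg.norm(centered, axis=1, keepdims=True)

def nearest_class(embedding, centers):
    """Classify by the nearest target vector -- no classification layer."""
    return int(np.argmax(centers @ embedding))

centers = simplex_collection(4)        # 5 class targets
gram = centers @ centers.T
off_diagonal = gram[~np.eye(5, dtype=bool)]
print(np.round(off_diagonal, 3))       # every pairwise cosine is -1/4
print(nearest_class(centers[2], centers))
```

Extending the number of classes then amounts to embedding the collection into a higher-dimensional one with the same spacing, rather than retraining a softmax head.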


Algorithms and data structures for automatic precision estimation of neural networks

Netay, Igor V.

arXiv.org Artificial Intelligence

We describe algorithms and data structures that extend a neural network library with automatic precision estimation for floating point computations. We also discuss the conditions under which these estimates are exact while preserving the high computational performance of neural network training and inference. Numerical experiments show the consequences of significant precision loss for particular values such as inference outputs and gradients, and the resulting deviations from mathematically predicted behavior. It turns out that almost any neural network accumulates computational inaccuracies, so its behavior does not coincide with that predicted by its mathematical model. This shows that tracking computational inaccuracies is important for the reliability of inference and training and for the interpretability of results.
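As a toy illustration of the kind of precision loss the paper tracks (not the paper's actual algorithms or data structures), one can carry a float64 "shadow" value alongside a float32 computation and report their deviation. The helper `with_error_bound` below is a hypothetical name introduced here:

```python
import numpy as np

def with_error_bound(op, *args):
    """Run op in float32 and in float64, returning the float32 result
    together with its absolute deviation from the float64 shadow value."""
    r32 = op(*[np.float32(a) for a in args])
    r64 = op(*[np.float64(a) for a in args])
    return float(r32), abs(float(r32) - float(r64))

# Absorption: adding 1.0 to 1e8 has no effect in float32 (ulp there is 8),
# so (a + b) - a silently loses the entire contribution of b.
val, err = with_error_bound(lambda a, b: (a + b) - a, 1e8, 1.0)
print(f"float32 result={val}, absolute error={err}")
```

A gradient or activation that passes through many such steps accumulates exactly this kind of silent inaccuracy, which is what makes automated tracking worthwhile.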


Factored Task and Motion Planning with Combined Optimization, Sampling and Learning

Ortiz-Haro, Joaquim

arXiv.org Artificial Intelligence

In this thesis, we aim to improve the performance of TAMP algorithms from three complementary perspectives. First, we investigate the integration of discrete task planning with continuous trajectory optimization. Our main contribution is a conflict-based solver that automatically discovers why a task plan might fail when considering the constraints of the physical world. This information is then fed back into the task planner, resulting in an efficient, bidirectional, and intuitive interface between task and motion, capable of solving TAMP problems with multiple objects, robots, and tight physical constraints. In the second part, we first illustrate that, given the wide range of tasks and environments within TAMP, neither sampling nor optimization is superior in all settings. To combine the strengths of both approaches, we have designed meta-solvers for TAMP, adaptive solvers that automatically select which algorithms and computations to use and how to best decompose each problem to find a solution faster. In the third part, we combine deep learning architectures with model-based reasoning to accelerate computations within our TAMP solver. Specifically, we target infeasibility detection and nonlinear optimization, focusing on generalization, accuracy, compute time, and data efficiency. At the core of our contributions is a refined, factored representation of the trajectory optimization problems inside TAMP. This structure not only facilitates more efficient planning, encoding of geometric infeasibility, and meta-reasoning but also provides better generalization in neural architectures.


Are Advanced FPGAs the Activators of Smarter AI Features?

#artificialintelligence

FPGAs' ability to distribute massive workloads across parallel computations enables AI features in highly efficient electronic devices. FREMONT, CA: Implementing FPGAs increases the number of parallel computational elements and the processing efficiency of electronic devices. Because FPGAs are both parallel and hardware-programmable, they excel at specialized workloads that demand heavy computation and tailored configurations. Over the past few years, FPGAs have proved to be a low-power solution, making them flexible and ideal for Neural Network (NN) architectures. Today, professionals are focusing on creating designs that support AI-based applications and functions.


The role of Artificial Intelligence today - Vents Magazine

#artificialintelligence

Artificial Intelligence (AI) is probably the branch of Computer Science experiencing the most growth nowadays. Even though it was born more than 70 years ago, it is now in a historical period of unprecedented interest because of the revolution it has caused in today's market. Until very recently, limited computational capacity made Artificial Intelligence produce very poor results on the problems it was applied to, which led to several periods of historical disillusionment in the industry and a considerable reduction in both interest in the discipline and the number of dedicated researchers. In recent years, however, Artificial Intelligence has gained enormous momentum, solving problems with computers that were previously thought impossible and reaching levels never attained before.


Flexible Approach for Computer-Assisted Reading and Analysis of Texts

Biskri, Ismaïl (Université du Québec à Trois-Rivières) | Hassani, Mohamed (Université du Québec à Trois-Rivières)

AAAI Conferences

A Computer-Assisted Reading and Analysis of Texts (CARAT) process is a complex technology that connects language, text, information, and knowledge theories with computational formalizations, statistical approaches, symbolic approaches, standard and non-standard logics, etc. This process should always remain under the control of the user, according to their subjectivity, their knowledge, and the purpose of their analysis. It therefore becomes important to design platforms that support the design of CARAT tools, their management, their adaptation to new needs, and experimentation. Although several platforms for mining data, including textual data, have emerged in recent years, they lack flexibility and sound formal foundations. In this paper, we propose a formal model with strong logical foundations, based on typed applicative systems.


Stop Saying the Brain Learns By Rewiring Itself - Facts So Romantic

Nautilus

Most neuroscientists believe that the brain learns by rewiring itself--by changing the strength of connections between brain cells, or neurons. But experimental results published a few years ago, from a lab at Lund University in Sweden, hint that we need to change our approach. They suggest the brain learns in a way more analogous to that of a computer: It encodes information into molecules inside neurons and reads out that information for use in computational operations. With a computer scientist, Adam King, I co-authored a book, Memory and the Computational Brain: Why Cognitive Science Will Transform Neuroscience. We argued that well-established results in cognitive science and computer science imply that computation in the brain must resemble computation in a computer in just this way.


The Future of Search and Discovery in Big Data Analytics: Ultrametric Information Spaces

Murtagh, Fionn, Contreras, Pedro

arXiv.org Machine Learning

Under the heading of "Addressing the big data challenge", the European 7th Framework Programme sees the issue thus (see INFSO, 2012): "Recent industry reports detail how data volumes are growing at a faster rate than our ability to interpret and exploit them for innovative ICT applications, for decision support, planning, monitoring, control and interaction. This includes unstructured data types such as video, audio, images and free text as well as structured data types such as database records, sensor readings and 3D. While each of these types requires some specific form of processing and analytics, many of the general principles for managing and storing them at extreme scales are common across all of them." Analytics tool capability is called for, to address these burgeoning issues in the data intensive industries, to support "effective policy making and implementation" of public bodies resulting in "significant annual savings from Big Data applications", and also to exploit open, linked data - "foster the reuse of public sector information and strengthen other open data activities linked to commercial exploitation." The "big data" marketplace is stated to be potentially worth approximately USD 600 billion. To address the challenges of search and discovery in massive and complex data sets and data flows, it is our contention in this work that we must move to an appropriate topology - to an appropriate framework such that computation is greatly facilitated. Our work is all about empowering those who are involved in data analytics, through clustering and related algorithms, to face these new challenges. Scalability and interactivity are two of the performance benefits that follow directly from clustering algorithms for search, retrieval and discovery that are of linear computational complexity or better (logarithmic, or constant).
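The property the authors build on is the ultrametric (strong triangle) inequality, d(x, z) ≤ max(d(x, y), d(y, z)), which holds for distances induced by a hierarchy (cophenetic distances) and is what makes such spaces so amenable to fast, tree-based search. A minimal check of that inequality, with a hypothetical `is_ultrametric` helper not taken from the paper:

```python
import numpy as np
from itertools import combinations

def is_ultrametric(D, tol=1e-12):
    """True iff d(x, z) <= max(d(x, y), d(y, z)) for every triple,
    i.e. the two largest sides of every triangle coincide."""
    n = len(D)
    for i, j, k in combinations(range(n), 3):
        s = sorted((D[i][j], D[j][k], D[i][k]))
        if s[2] > s[1] + tol:
            return False
    return True

# Cophenetic distances from a small dendrogram: {0,1} merge at height 1,
# {2,3} merge at height 2, and everything joins at height 5.
coph = np.array([[0, 1, 5, 5],
                 [1, 0, 5, 5],
                 [5, 5, 0, 2],
                 [5, 5, 2, 0]], dtype=float)

# Ordinary Euclidean distances between points 0, 1, 3 on a line are not
# ultrametric: the triple (1, 2, 3) has no repeated largest side.
line = np.array([[0, 1, 3],
                 [1, 0, 2],
                 [3, 2, 0]], dtype=float)

print(is_ultrametric(coph), is_ultrametric(line))
```

In an ultrametric space, every point inside a cluster is equidistant from any point outside it, which is why nearest-neighbor queries can descend a hierarchy in logarithmic or constant time rather than scanning linearly.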